Gigabit Ethernet


Published on Feb 21, 2020

Abstract

Since its inception at Xerox Corporation in the early 1970s, Ethernet has been the dominant networking protocol. Of all current networking protocols, Ethernet has, by far, the highest number of installed ports and offers the best cost/performance ratio relative to Token Ring, Fiber Distributed Data Interface (FDDI), and ATM for desktop connectivity.

Fast Ethernet, which increased Ethernet speed from 10 to 100 megabits per second (Mbps), provided a simple, cost-effective option for backbone and server connectivity.

In 1995, the Fast Ethernet standard was approved by the IEEE. Fast Ethernet provided ten times the bandwidth of Ethernet, along with new features such as full-duplex operation and auto-negotiation, establishing Ethernet as a scalable technology. The Fast Ethernet standard was pushed by an industry consortium called the Fast Ethernet Alliance. A similar alliance, the Gigabit Ethernet Alliance, was formed by 11 companies in May 1996, soon after the IEEE announced the formation of the 802.3z Gigabit Ethernet standards project. At last count, there were over 95 companies in the alliance, drawn from the networking, computer, and integrated-circuit industries.

Gigabit Ethernet builds on the Ethernet protocol but increases speed tenfold over Fast Ethernet, to 1000 Mbps, or 1 gigabit per second (Gbps). The protocol, standardized in June 1998, promises to be a dominant player in high-speed local area network backbones and server connectivity. Because Gigabit Ethernet builds so heavily on Ethernet, customers can apply their existing knowledge base to manage and maintain gigabit networks.

Gigabit Ethernet employs the same Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol, the same frame format, and the same frame size as its predecessors. For the vast majority of network users, this means their existing network investment can be extended to gigabit speeds at a reasonable initial cost, without the need to re-educate support staff and users, and without the need to invest in additional protocol stacks or middleware. The result is a low cost of ownership.
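To make this continuity concrete, here is a small back-of-envelope sketch (an illustration, not text from the standard): because the frame format and size limits are unchanged, moving from 10 to 1000 Mbps only shrinks the time a frame occupies the wire.

    # Serialization time of one maximum-size (1518-byte) Ethernet frame
    # at each generation's line rate; the frame itself never changes.
    FRAME_MAX_BYTES = 1518

    for name, mbps in [("Ethernet", 10), ("Fast Ethernet", 100),
                       ("Gigabit Ethernet", 1000)]:
        micros = FRAME_MAX_BYTES * 8 / mbps   # Mbit/s equals bit/us
        print(f"{name:16s} {mbps:5d} Mbps -> {micros:7.1f} us per frame")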

The new Gigabit Ethernet standard will be fully compatible with existing Ethernet installations. It will retain CSMA/CD as the access method and will support full-duplex as well as half-duplex operation. Initially, single-mode fiber, multimode fiber, and short-haul coaxial cable will be supported; standards for twisted-pair cabling are expected by 1999. The standard uses the physical signaling technology of Fibre Channel to support gigabit rates over optical fiber.
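As a rough sketch of the retained access method (a simplified illustration, not the standard's text), the truncated binary exponential backoff that a CSMA/CD station performs after a collision can be written as:

    import random

    def csma_cd_backoff_slots(collisions: int) -> int:
        # After the n-th consecutive collision, wait a random number of
        # slot times in the range 0 .. 2^min(n, 10) - 1; IEEE 802.3
        # aborts the transmission after 16 failed attempts.
        if collisions > 16:
            raise RuntimeError("excessive collisions: frame aborted")
        return random.randrange(2 ** min(collisions, 10))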

Initially, Gigabit Ethernet will be deployed as a backbone in existing networks. It can be used to aggregate traffic between clients and "server farms", and for connecting Fast Ethernet switches. It can also be used to connect workstations and servers for high-bandwidth applications such as medical imaging or CAD.
Architecture

In order to accelerate speeds from 100 Mbps Fast Ethernet up to 1 Gbps, several changes needed to be made to the physical interface. It has been decided that Gigabit Ethernet will look identical to Ethernet from the data link layer upward. The challenges involved in accelerating to 1 Gbps have been resolved by merging two technologies: IEEE 802.3 Ethernet and ANSI X3T11 Fibre Channel. Figure 1 shows how key components from each technology have been leveraged to form Gigabit Ethernet.
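One concrete piece of that merge is the 8B/10B block code that the 1000BaseX physical layer inherits from Fibre Channel: every 8 data bits are carried as a 10-bit code group, so the serial line must run 25 percent faster than the payload rate. A one-line check:

    # 8B/10B overhead: 10 line bits carry 8 data bits, so a 1.0 Gbps
    # payload requires a 1.25 Gbaud serial line.
    print(f"Line rate: {1.0 * 10 / 8} Gbaud")   # -> 1.25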

Long-Wave and Short-Wave Lasers over Fiber-Optic Media

Two laser standards will be supported over fiber: 1000BaseSX (short-wave laser) and 1000BaseLX (long-wave laser). Short- and long-wave lasers will be supported over multimode fiber, which is available in two types: 62.5- and 50-micron core diameters. Long-wave lasers will be used for single-mode fiber, because this fiber is optimized for long-wave laser transmission. There is no support for short-wave laser over single-mode fiber.
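The media support just described can be summarized as a small lookup (a restatement of the paragraph above; maximum distances are omitted because none are given here):

    # Which fiber types each Gigabit Ethernet laser standard supports,
    # per the description above.
    GBE_MEDIA = {
        "1000BaseSX (short-wave)": ["62.5-micron multimode",
                                    "50-micron multimode"],
        "1000BaseLX (long-wave)":  ["62.5-micron multimode",
                                    "50-micron multimode",
                                    "9-micron single-mode"],
    }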

The key differences between long- and short-wave laser technologies are cost and distance. Optical fiber attenuates light by different amounts at different wavelengths, and "dips" of low attenuation occur at particular wavelengths. Short- and long-wave lasers take advantage of these dips, illuminating the fiber at different low-loss wavelengths.

Short-wave lasers are readily available because variants of these lasers are used in compact-disc technology. Long-wave lasers take advantage of the attenuation dips at longer wavelengths. The net result is that short-wave lasers cost less but traverse shorter distances, whereas long-wave lasers are more expensive but traverse longer distances.
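As a hedged worked example (the dB/km figures below are typical textbook values for the two low-loss windows, not numbers from this paper), comparing the loss accumulated over a given run shows why long-wave links reach farther on the same power budget:

    # Typical (assumed) attenuation at the two low-loss "dips"; real
    # cable plants vary, and connector losses are ignored here.
    ATTENUATION_DB_PER_KM = {"850 nm (short-wave)": 3.0,
                             "1310 nm (long-wave)": 0.4}
    SPAN_KM = 2.0

    for window, loss in ATTENUATION_DB_PER_KM.items():
        print(f"{window}: {loss * SPAN_KM:.1f} dB over {SPAN_KM} km")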

Single-mode fiber has traditionally been used in networking cable plants to achieve long distances. In Ethernet, for example, single-mode links reach up to 10 km. Single-mode fiber, using a 9-micron core and a 1300-nanometer laser, is the longest-distance technology. The small core and the lower-energy, longer-wavelength laser allow the signal to traverse greater distances, enabling single-mode fiber to reach the greatest distances of all supported media with the least signal loss.
